

Learning Bayesian Networks with Low Rank Conditional Probability Tables

Neural Information Processing Systems

In this paper, we provide a method to learn the directed structure of a Bayesian network from data. The data is accessed by making conditional probability queries to a black-box model. We introduce a notion of simplicity of representation of conditional probability tables for the nodes in the Bayesian network, which we call `low rank`, and exploit it to learn the Bayesian network using very few queries. We formally prove that our method correctly recovers the true directed structure, runs in polynomial time, and needs only polynomially many samples with respect to the number of nodes. We also provide further improvements in efficiency if we have access to some observational data.
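One plausible way to picture a low-rank CPT (the paper's formal definition may differ; the construction and names below are illustrative only): view the CPT of a node X with k binary parents as a 2^k x |X| matrix whose rows are the conditional distributions P(X | parent configuration), and call it low rank when that matrix has rank r much smaller than min(2^k, |X|).

```python
import numpy as np

rng = np.random.default_rng(0)

k, n_states, r = 4, 3, 2           # 4 binary parents, 3 states of X, rank 2
A = rng.random((2**k, r))          # mixing weights, one row per parent config
B = rng.random((r, n_states))      # r "basis" conditional distributions
B /= B.sum(axis=1, keepdims=True)  # normalize basis rows to distributions
A /= A.sum(axis=1, keepdims=True)  # make each row a convex mixture

cpt = A @ B                        # every row is a valid distribution over X
rank = np.linalg.matrix_rank(cpt)  # at most r, far below the 16 rows
```

A full-rank CPT over 4 parents would need 16 independent rows; the factored form above is described by only 2 basis distributions plus mixture weights, which is the kind of compression that can make few-query learning plausible.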


DAGs with No Fears: A Closer Look at Continuous Optimization for Learning Bayesian Networks

Neural Information Processing Systems

This paper re-examines a continuous optimization framework dubbed NOTEARS for learning Bayesian networks. We first generalize existing algebraic characterizations of acyclicity to a class of matrix polynomials. Next, focusing on a one-parameter-per-edge setting, we show that the Karush-Kuhn-Tucker (KKT) optimality conditions for the NOTEARS formulation cannot be satisfied except in a trivial case, which explains the behavior of the associated algorithm. We then derive the KKT conditions for an equivalent reformulation, show that they are indeed necessary, and relate them to explicit constraints that certain edges be absent from the graph. If the score function is convex, these KKT conditions are also sufficient for local minimality despite the non-convexity of the constraint. Informed by the KKT conditions, a local search post-processing algorithm is proposed and shown to substantially and universally improve the structural Hamming distance of all tested algorithms, typically by a factor of 2 or more. Some combinations with local search are both more accurate and more efficient than the original NOTEARS.
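For context, NOTEARS-style methods encode acyclicity of a weighted adjacency matrix W as a smooth equality constraint. A polynomial relative of the original tr(exp(W∘W)) - d characterization (one instance of the class of matrix polynomials the abstract alludes to) can be sketched as:

```python
import numpy as np

def h_poly(W):
    """Polynomial acyclicity measure: tr((I + (W*W)/d)^d) - d.

    Non-negative, and zero exactly when the support of W is a DAG;
    a polynomial analogue of the NOTEARS tr(exp(W*W)) - d constraint.
    """
    d = W.shape[0]
    A = W * W                        # elementwise square: non-negative weights
    M = np.eye(d) + A / d
    return np.trace(np.linalg.matrix_power(M, d)) - d

W_dag = np.array([[0.0, 1.5],
                  [0.0, 0.0]])      # single edge 1 -> 2: acyclic
W_cyc = np.array([[0.0, 1.0],
                  [1.0, 0.0]])      # 1 -> 2 -> 1: a 2-cycle

h_dag, h_cyc = h_poly(W_dag), h_poly(W_cyc)
```

The measure vanishes on the DAG and is strictly positive on the cycle, which is what lets the structure-learning problem be posed as continuous optimization subject to h(W) = 0.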


Learning Bayesian networks with ancestral constraints

Neural Information Processing Systems

We consider the problem of learning Bayesian networks optimally, when subject to background knowledge in the form of ancestral constraints. Our approach is based on a recently proposed framework for optimal structure learning based on non-decomposable scores, which is general enough to accommodate ancestral constraints. The proposed framework exploits oracles for learning structures using decomposable scores, which cannot accommodate ancestral constraints since they are non-decomposable. We show how to empower these oracles by passing them decomposable constraints that they can handle, which are inferred from ancestral constraints that they cannot handle. Empirically, we demonstrate that our approach can be orders-of-magnitude more efficient than alternative frameworks, such as those based on integer linear programming.
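One simple flavor of the inference the abstract mentions, sketched under our own simplifications (this is not the paper's algorithm): close "must be an ancestor" constraints under transitivity, then turn each ancestral constraint into a decomposable edge constraint that a standard oracle can enforce, since acyclicity forbids the reverse edge.

```python
from itertools import product

must_ancestor = {("A", "C"), ("C", "D")}   # (X, Y): X must be an ancestor of Y
nodes = {"A", "B", "C", "D"}

# Transitive closure: ancestor-of is transitive.
closed = set(must_ancestor)
changed = True
while changed:
    changed = False
    for (x, y), (u, v) in product(list(closed), repeat=2):
        if y == u and (x, v) not in closed:
            closed.add((x, v))
            changed = True

# Acyclicity: if X must be an ancestor of Y, the directed edge Y -> X
# is forbidden -- a constraint a decomposable-score oracle can handle.
forbidden_edges = {(y, x) for (x, y) in closed}
```

Here the derived constraint (A, D) in `closed` yields the forbidden edge D -> A, even though no input constraint mentions that pair directly.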




Advances in Learning Bayesian Networks of Bounded Treewidth

Neural Information Processing Systems

This work presents novel algorithms for learning Bayesian networks of bounded treewidth. Both exact and approximate methods are developed. The exact method combines mixed integer linear programming formulations for structure learning and treewidth computation. The approximate method consists in sampling k-trees (maximal graphs of treewidth k), and subsequently selecting, exactly or approximately, the best structure whose moral graph is a subgraph of that k-tree. The approaches are empirically compared to each other and to state-of-the-art methods on a collection of public data sets with up to 100 variables.
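The subgraph test in the approximate method relies on the standard moralization construction: connect ("marry") every pair of parents that share a child, then drop edge directions. A minimal sketch (variable names are ours):

```python
from itertools import combinations

def moral_graph(parents):
    """parents: dict mapping each node to the set of its parents in the DAG.

    Returns the moral graph as a set of undirected edges (frozensets).
    """
    undirected = set()
    for child, ps in parents.items():
        for p in ps:                              # drop directions
            undirected.add(frozenset((p, child)))
        for u, v in combinations(sorted(ps), 2):  # marry co-parents
            undirected.add(frozenset((u, v)))
    return undirected

dag = {"A": set(), "B": set(), "C": {"A", "B"}}   # v-structure A -> C <- B
edges = moral_graph(dag)
```

Note that the v-structure produces the extra edge A–B, which is exactly why bounding the treewidth of the moral graph, rather than of the DAG's skeleton, is the relevant condition for efficient inference.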


Learning Bayesian Networks with Thousands of Variables

Neural Information Processing Systems

We present a method for learning Bayesian networks from data sets containing thousands of variables without the need for structure constraints. Our approach is made of two parts. The first is a novel algorithm that effectively explores the space of possible parent sets of a node.
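As a point of reference for what such exploration replaces, the naive baseline exhaustively scores every candidate parent set up to a size bound and keeps the best one per node. The sketch below uses a hypothetical toy score (`toy_score` is our stand-in, not a real decomposable score such as BIC or BDeu, and this is not the paper's algorithm):

```python
from itertools import combinations

def toy_score(node, parent_set):
    # Hypothetical score: rewards "relevant" parents, penalizes set size.
    relevant = {"A", "B"}                    # pretend these parents matter
    gain = len(set(parent_set) & relevant)
    return gain - 0.3 * len(parent_set)

def best_parent_set(node, candidates, max_size=2):
    """Exhaustive search over parent sets of size <= max_size."""
    best = (toy_score(node, ()), ())
    for k in range(1, max_size + 1):
        for ps in combinations(sorted(candidates), k):
            best = max(best, (toy_score(node, ps), ps))
    return best

score, parents = best_parent_set("X", {"A", "B", "C"})
```

The number of candidate sets grows combinatorially in the number of variables, which is why an exhaustive sweep like this breaks down at thousands of variables and a smarter exploration strategy is needed.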


Reviews: Learning Bayesian Networks with Low Rank Conditional Probability Tables

Neural Information Processing Systems

The paper introduces a notion of low rank conditional probability tables (CPTs). Overall the reviewers found the results innovative, interesting and theoretically justified. Many of the reviewers' concerns were properly addressed in the rebuttal. Several reviewers stated that more empirical work could have greatly benefited the paper.


Review for NeurIPS paper: DAGs with No Fears: A Closer Look at Continuous Optimization for Learning Bayesian Networks

Neural Information Processing Systems

Weaknesses: The main problem with the paper is that it fails to convey the actual scope of the new results, especially in the global context of BN learning. In fact, their methods can apparently be applied only to the continuous case: no mention is ever made of whether the same method can work with categorical variables. This is reflected in the selected set of "state-of-the-art" methods against which they compare their methods, which is a narrow subset of the whole literature on BN learning. Saying something like "As mentioned, this paper is most closely related to the fully continuous framework of ..." is definitely not enough: a more precise and thorough description of the limitations of this work, and of its position in the whole BN learning literature, is needed. The title and the abstract should be modified along the same lines.


Review for NeurIPS paper: DAGs with No Fears: A Closer Look at Continuous Optimization for Learning Bayesian Networks

Neural Information Processing Systems

After discussions, there was consensus that the paper's ideas deserve publication, even though they are somewhat incremental and come without guarantees, as they build on NOTEARS. The reviewers appreciated the discussion of the issues with NOTEARS and the attempt to improve on them.